Asymptotic properties of dual averaging algorithm for constrained distributed stochastic optimization

Authors

Abstract

Considering the constrained stochastic optimization problem over a time-varying random network, where agents collectively minimize a sum of objective functions subject to a common constraint set, we investigate the asymptotic properties of a distributed algorithm based on dual averaging of gradients. Different from most existing works on such algorithms, which mainly focused on their non-asymptotic properties, we prove not only the almost sure convergence and the rate of convergence of the algorithm, but also its asymptotic normality and asymptotic efficiency. Firstly, for general convex objective functions, we show that consensus can be achieved and the agents' estimates converge to the same optimal point. For the case of linear optimization, we show that the mirror map of the averaged dual sequence identifies the active constraints of the optimal solution with probability 1, which then helps us establish the asymptotic normality of the algorithm. Furthermore, we verify that the algorithm is asymptotically optimal. To the best of our knowledge, this is the first such result for distributed dual averaging algorithms. Finally, a numerical example is provided to justify the theoretical analysis.
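To make the update concrete, the following is a minimal sketch of a distributed dual averaging iteration of the kind the abstract describes, under illustrative assumptions: quadratic local losses with additive gradient noise, a box constraint set, the Euclidean prox function psi(x) = 0.5*||x||^2 (so the mirror map reduces to a projection), and a random gossip pair standing in for the time-varying random network. All names and parameters here (project_box, random_gossip_matrix, the step size alpha_t = 1/sqrt(t)) are assumptions for illustration, not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
n_agents, dim, T = 10, 5, 3000

# Hypothetical local data: agent i holds f_i(x) = 0.5*||A_i x - b_i||^2;
# the network minimizes sum_i f_i(x) over the common set X = [-1, 1]^dim.
A = rng.normal(size=(n_agents, dim, dim))
b = rng.normal(size=(n_agents, dim))

def project_box(y, lo=-1.0, hi=1.0):
    # Euclidean projection onto X; with psi = 0.5*||x||^2 the mirror map
    # Pi_X(z, alpha) = argmin_{x in X} {<z,x> + psi(x)/alpha} is Proj_X(-alpha*z).
    return np.clip(y, lo, hi)

def random_gossip_matrix():
    # One random doubly stochastic mixing matrix per round, standing in
    # for one realization of the time-varying random network.
    P = np.eye(n_agents)
    i, j = rng.choice(n_agents, size=2, replace=False)
    P[[i, j]] = 0.0
    P[i, i] = P[i, j] = P[j, i] = P[j, j] = 0.5
    return P

z = np.zeros((n_agents, dim))  # dual states: running sums of mixed gradients
x = np.zeros((n_agents, dim))  # primal estimates
for t in range(1, T + 1):
    g = np.stack([A[i].T @ (A[i] @ x[i] - b[i]) for i in range(n_agents)])
    g += 0.1 * rng.normal(size=g.shape)   # stochastic gradient noise
    z = random_gossip_matrix() @ z + g    # mix neighbors' duals, add new gradient
    x = project_box(-z / np.sqrt(t))      # mirror step with alpha_t = 1/sqrt(t)

# Consensus check: maximum disagreement across agents should be small.
print("max disagreement:", np.linalg.norm(x - x.mean(axis=0), axis=1).max())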



Related articles

Optimal Regularized Dual Averaging Methods for Stochastic Optimization

This paper considers a wide spectrum of regularized stochastic optimization problems where both the loss function and the regularizer can be non-smooth. We develop a novel algorithm based on the regularized dual averaging (RDA) method that can simultaneously achieve the optimal convergence rates for both convex and strongly convex losses. In particular, for strongly convex loss, it achieves the opti...


Dual Averaging Methods for Regularized Stochastic Learning and Online Optimization

We consider regularized stochastic learning and online optimization problems, where the objective function is the sum of two convex terms: one is the loss function of the learning task, and the other is a simple regularization term such as the ℓ1-norm for promoting sparsity. We develop extensions of Nesterov's dual averaging method that can exploit the regularization structure in an online setting...


Dual Averaging Method for Regularized Stochastic Learning and Online Optimization

We consider regularized stochastic learning and online optimization problems, where the objective function is the sum of two convex terms: one is the loss function of the learning task, and the other is a simple regularization term such as the ℓ1-norm for promoting sparsity. We develop a new online algorithm, the regularized dual averaging (RDA) method, which can explicitly exploit the regularizatio...
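As a concrete instance of the RDA update described in these two entries, here is a minimal sketch assuming the common instantiation: regularizer Psi(w) = lam*||w||_1 plus an auxiliary strongly convex term (gamma/sqrt(t)) * 0.5*||w||^2, for which each step has a closed-form soft-threshold solution. The logistic-regression data stream and the constants lam and gamma are illustrative assumptions, not taken from either paper.

import numpy as np

rng = np.random.default_rng(1)
dim, T, lam, gamma = 20, 5000, 0.05, 5.0
w_true = np.zeros(dim)
w_true[:3] = [2.0, -2.0, 1.5]                      # assumed sparse ground truth

w, g_sum = np.zeros(dim), np.zeros(dim)
for t in range(1, T + 1):
    a = rng.normal(size=dim)                       # one streaming sample
    y = 1.0 if rng.random() < 1.0 / (1.0 + np.exp(-a @ w_true)) else -1.0
    g_sum += -y * a / (1.0 + np.exp(y * (a @ w)))  # logistic-loss gradient
    g_bar = g_sum / t                              # average of all past gradients
    # RDA step: w = argmin_w {<g_bar, w> + lam*||w||_1
    #                         + (gamma/sqrt(t)) * 0.5*||w||^2},
    # solved coordinate-wise by soft thresholding: |g_bar_j| <= lam gives 0.
    w = -(np.sqrt(t) / gamma) * np.sign(g_bar) * np.maximum(np.abs(g_bar) - lam, 0.0)

print("recovered support:", np.nonzero(w)[0])      # expect mostly {0, 1, 2}

Note how sparsity comes from the thresholding inside the closed-form step itself, rather than from truncating small weights after the fact; this is the sense in which RDA "explicitly exploits" the regularization structure.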


Random Walk Distributed Dual Averaging Method For Decentralized Consensus Optimization

In this paper, we address the problem of distributed learning over a decentralized network, arising from scenarios including distributed sensors or geographically separated data centers. We propose a fully distributed algorithm called random walk distributed dual averaging (RW-DDA) that only requires local updates. Our RW-DDA method improves the existing distributed dual averaging (DDA) method...


Random Walk Distributed Dual Averaging Method For Decentralized Consensus Optimization

In this paper, we address the problem of distributed learning over a large number of distributed sensors or geographically separated data centers, which suffer from sampling biases across nodes. We propose an algorithm called random walk distributed dual averaging (RW-DDA) method that only requires local updates and is fully distributed. Our RW-DDA method is robust to the change in network topo...
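To illustrate the single-token mechanism behind RW-DDA as described in the two entries above, here is a minimal sketch under assumed ingredients: a fixed ring graph with uniform neighbor choice, quadratic local losses f_i(x) = 0.5*||x - c_i||^2 (whose sum is minimized at the mean of the c_i), and a Euclidean mirror step onto a box. It conveys the random-walk idea rather than reproducing the authors' exact RW-DDA update; in particular, only the token's state is tracked here, whereas the actual method also leaves updated estimates at the visited nodes.

import numpy as np

rng = np.random.default_rng(2)
n_agents, dim, T = 8, 3, 20000
c = rng.normal(size=(n_agents, dim))   # agent i holds f_i(x) = 0.5*||x - c[i]||^2
ring = {i: ((i - 1) % n_agents, (i + 1) % n_agents) for i in range(n_agents)}

z = np.zeros(dim)      # single dual state carried by the walking token
x = np.zeros(dim)      # token's current primal estimate
x_avg = np.zeros(dim)  # running average (the iterate dual averaging analyses track)
node = 0
for t in range(1, T + 1):
    z += x - c[node]                     # visited node adds its local gradient
    x = np.clip(-z / np.sqrt(t), -5, 5)  # mirror step onto the box X = [-5, 5]^dim
    x_avg += (x - x_avg) / t
    node = int(rng.choice(ring[node]))   # token walks to a uniform random neighbor

print("estimate:", np.round(x_avg, 3))
print("target  :", np.round(c.mean(axis=0), 3))  # minimizer of the sum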



Journal

Journal title: Systems & Control Letters

Year: 2022

ISSN: 1872-7956, 0167-6911

DOI: https://doi.org/10.1016/j.sysconle.2022.105252